MMM: Multi-Stage Multi-Task Learning for Multi-Choice Reading Comprehension
Authors
Abstract
Similar Resources
Multi-Stage Multi-Task Feature Learning
Multi-task sparse feature learning aims to improve the generalization performance by exploiting the shared features among tasks. It has been successfully applied to many applications including computer vision and biomedical informatics. Most of the existing multi-task sparse feature learning algorithms are formulated as a convex sparse regularization problem, which is usually suboptimal, due to...
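As a concrete illustration of the stage-wise idea the snippet describes, here is a minimal Python sketch of multi-task feature learning under a capped-ℓ1-style penalty with squared loss, solved by proximal gradient descent with stage-wise reweighting. The function names and hyperparameters (`lam`, `theta`, `n_stages`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold_rows(W, thresholds):
    """Row-wise soft-thresholding: shrinks each feature's weight row."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thresholds[:, None] / np.maximum(norms, 1e-12))
    return scale * W

def multi_stage_mtl(Xs, ys, lam=0.1, theta=0.5, n_stages=3, n_iters=200, lr=1e-2):
    """Xs: list of per-task design matrices (n_t, d); ys: list of targets (n_t,)."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    weights = np.full(d, lam)             # stage 1: plain group-Lasso penalty
    for stage in range(n_stages):
        for _ in range(n_iters):          # proximal gradient on squared loss
            G = np.zeros_like(W)
            for t in range(T):
                G[:, t] = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / len(ys[t])
            W = soft_threshold_rows(W - lr * G, lr * weights)
        # Re-weight: features whose row norm already exceeds theta are no
        # longer penalized, approximating a capped (non-convex) penalty.
        weights = lam * (np.linalg.norm(W, axis=1) < theta).astype(float)
    return W
```

Each stage solves a convex weighted problem; exempting large-weight features from the next stage's penalty is how a sequence of convex problems approximates the non-convex objective.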
Multi-Stage Multi-Task Learning with Reduced Rank
Multi-task learning (MTL) seeks to improve generalization performance by sharing information among multiple tasks. Many existing MTL approaches aim to learn a low-rank structure on the weight matrix, which stores the model parameters of all tasks, to achieve task sharing; as a consequence, trace norm regularization is widely used in the MTL literature. A major limitation of these ap...
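For reference, a minimal sketch of the trace-norm approach this snippet describes: proximal gradient descent in which the proximal step is singular-value soft-thresholding, which drives the task weight matrix toward low rank. The names and step sizes are illustrative, not from the paper.

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: prox of tau * ||W||_* (trace norm)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_mtl(Xs, ys, lam=0.1, n_iters=300, lr=1e-2):
    """Xs: list of per-task design matrices (n_t, d); ys: list of targets (n_t,)."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))                  # column t holds task t's weights
    for _ in range(n_iters):
        G = np.zeros_like(W)
        for t in range(T):
            G[:, t] = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / len(ys[t])
        W = svt(W - lr * G, lr * lam)     # encourages a low-rank weight matrix
    return W
```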
Multi-Mention Learning for Reading Comprehension with Neural Cascades
Reading comprehension is a challenging task, especially when performed over longer documents or across multiple evidence documents, where the answer is likely to recur. Existing neural architectures typically do not scale to the entire evidence and hence resort to selecting a single passage in the document (via truncation or other means) and carefully searching for the answer within that pas...
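A minimal sketch of one way to aggregate evidence over multiple mentions of the same candidate answer, pooling per-mention scores with log-sum-exp so that any strongly supported mention can promote its answer. The toy inputs and the pooling choice are assumptions for illustration, not the paper's exact cascade.

```python
import numpy as np
from collections import defaultdict

def aggregate_mention_scores(mention_scores, mention_to_answer):
    """Pool per-mention scores into per-answer scores via log-sum-exp."""
    grouped = defaultdict(list)
    for score, ans in zip(mention_scores, mention_to_answer):
        grouped[ans].append(score)
    return {ans: np.logaddexp.reduce(s) for ans, s in grouped.items()}

# Toy usage: two mentions of "Paris" jointly outscore each "Lyon" mention.
scores = [2.1, 0.3, 1.7, -0.5]
answers = ["Paris", "Lyon", "Paris", "Lyon"]
best = max(aggregate_mention_scores(scores, answers).items(), key=lambda kv: kv[1])
print(best)
```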
Multi-Objective Multi-Task Learning
This dissertation presents multi-objective multi-task learning, a new learning framework. Given a fixed sequence of tasks, the learned hypothesis space must minimize multiple objectives. Since these objectives are often in conflict, we cannot find a single best solution, so we analyze a set of solutions. We first propose and analyze a new learning principle, empirically efficient learning. From...
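To make "a set of solutions" concrete, here is a minimal sketch of a Pareto filter that keeps every solution not dominated on all objectives at once; the inputs are toy objective vectors, not taken from the dissertation.

```python
def pareto_set(solutions):
    """solutions: list of tuples of objective values (lower is better).
    Keeps a solution unless some other solution is at least as good on
    every objective and strictly better on at least one."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Toy usage: (3.0, 3.0) is dominated by (2.0, 2.0) and dropped.
print(pareto_set([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]))
```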
Multi-Task Multi-Sample Learning
In the exemplar SVM (E-SVM) approach of Malisiewicz et al., ICCV 2011, an ensemble of SVMs is learnt, with each SVM trained independently using only a single positive sample and all negative samples for the class. In this paper we develop a multi-sample learning (MSL) model which enables joint regularization of the E-SVMs without any additional cost over the original ensemble learning. The adva...
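A minimal sketch of the underlying E-SVM ensemble this snippet builds on: one linear SVM per positive sample against all negatives, assuming scikit-learn's LinearSVC. The joint MSL regularization proposed in the paper is not reproduced here.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_exemplar_svms(positives, negatives, C=1.0):
    """positives: list of feature vectors; negatives: array-like (n_neg, d).
    Returns one independently trained linear SVM per positive exemplar."""
    X_neg = np.asarray(negatives)
    models = []
    for x_pos in positives:
        X = np.vstack([np.asarray(x_pos)[None, :], X_neg])
        y = np.array([1] + [0] * len(X_neg))   # single positive vs. all negatives
        models.append(LinearSVC(C=C).fit(X, y))
    return models
```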
Journal
Journal title: Proceedings of the AAAI Conference on Artificial Intelligence
Year: 2020
ISSN: 2374-3468, 2159-5399
DOI: 10.1609/aaai.v34i05.6310